
    ASIdE: Using Autocorrelation-Based Size Estimation for Scheduling Bursty Workloads.

    Temporal dependence in workloads creates peak congestion that can make service unavailable and reduce system performance. To improve system performability under conditions of temporal dependence, a server should quickly process bursts of requests that may have large service demands. In this paper, we propose and evaluate ASIdE, an Autocorrelation-based SIze Estimation technique that selectively delays requests which contribute to the workload temporal dependence. ASIdE implicitly approximates the shortest job first (SJF) scheduling policy, but without any prior knowledge of job service times. Extensive experiments show that (1) ASIdE achieves good service time estimates from the temporal dependence structure of the workload to implicitly approximate the behavior of SJF; and (2) ASIdE successfully counteracts peak congestion in the workload and improves system performability under a wide variety of settings. Specifically, we show that system capacity under ASIdE is largely increased compared to the first-come first-served (FCFS) scheduling policy and is highly competitive with SJF. © 2012 IEEE
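
    The abstract does not detail ASIdE's mechanism, so the Python sketch below is only a loose illustration of the general idea of deferring requests that arrive while recent autocorrelation signals a burst; the names (AsideLikeScheduler, window, threshold) are invented and this is not the published algorithm.

```python
import collections

class AsideLikeScheduler:
    """Toy sketch, not the published ASIdE policy: defer requests that arrive
    while the recent service-time history is strongly autocorrelated, on the
    assumption that temporal dependence means bursts carry large demands."""

    def __init__(self, window=50, lag=1, threshold=0.2):
        self.history = collections.deque(maxlen=window)   # recent service times
        self.lag, self.threshold = lag, threshold
        self.ready, self.delayed = collections.deque(), collections.deque()

    def _autocorr(self):
        xs = list(self.history)
        if len(xs) <= self.lag:
            return 0.0
        mean = sum(xs) / len(xs)
        var = sum((x - mean) ** 2 for x in xs) or 1e-12
        cov = sum((xs[i] - mean) * (xs[i + self.lag] - mean)
                  for i in range(len(xs) - self.lag))
        return cov / var

    def submit(self, request):
        # During a detected burst, hold the request back; otherwise serve FCFS.
        if self._autocorr() > self.threshold:
            self.delayed.append(request)
        else:
            self.ready.append(request)

    def record_completion(self, service_time):
        self.history.append(service_time)        # feeds the burst detector

    def next_request(self):
        # Drain the normal queue first; delayed requests run once the burst subsides.
        if self.ready:
            return self.ready.popleft()
        return self.delayed.popleft() if self.delayed else None
```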

    Less can be more: micro-managing VMs in Amazon EC2

    © 2015 IEEE. Micro instances (t1.micro) are the class of Amazon EC2 virtual machines (VMs) offering the lowest operational cost for applications with short bursts in their CPU requirements. As processing proceeds, EC2 throttles the CPU capacity of micro instances in a complex, unpredictable manner. This paper aims at making micro instances more predictable and efficient to use. First, we present a characterization of EC2 micro instances that evaluates the complex interactions between cost, performance, idleness, and CPU throttling. Next, we define adaptive algorithms to manage CPU consumption by learning the workload characteristics at runtime and by injecting idleness to diminish host-level throttling. We show that a gradient-hill strategy leads to favorable results. For CPU-bound workloads, we observe that a significant portion of jobs (up to 65%) can have end-to-end times that are even four times shorter than those of the more expensive m1.small class. Our algorithms drastically reduce the long tails of job execution times on the micro instances, resulting in favorable comparisons against even small instances.
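
    To make the gradient-hill idea concrete, here is a hedged Python sketch (invented names run_chunk, idle_frac, step; not the paper's implementation) that hill-climbs on the fraction of injected idleness, keeping a move only while the measured work rate improves.

```python
import time

def gradient_hill_idleness(run_chunk, total_chunks, step=0.05, init_idle=0.1):
    """Illustrative hill-climbing sketch, not the paper's implementation:
    adapt the fraction of idleness injected after each CPU-bound work chunk,
    reversing direction whenever the measured work rate stops improving."""
    idle_frac, best_rate, direction = init_idle, 0.0, +1
    for _ in range(total_chunks):
        start = time.time()
        run_chunk()                                    # one unit of CPU-bound work
        busy = time.time() - start
        time.sleep(busy * idle_frac)                   # injected idleness
        rate = 1.0 / (max(busy, 1e-9) * (1.0 + idle_frac))  # chunks per wall-clock second
        if rate >= best_rate:
            best_rate = rate                           # keep climbing the same way
        else:
            direction = -direction                     # got worse: reverse direction
        idle_frac = min(0.9, max(0.0, idle_frac + direction * step))
    return idle_frac

# e.g. gradient_hill_idleness(my_cpu_task, 500) for some CPU-bound my_cpu_task()
```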

    Verbal and non-verbal recognition memory assessment: validation of a computerized version of the Recognition Memory Test

    Background: The use of computerized devices for neuropsychological assessment (CNADs) as an effective alternative to the traditional pencil-and-paper modality has recently increased exponentially, both in clinical practice and in research, especially due to the pandemic. However, several authors underline that the computerized modality requires the same psychometric validity as "in-presence" tests. The current study aimed at building and validating a computerized version of the verbal and non-verbal Recognition Memory Test (RMT) for words, unknown faces and buildings. Methods: Seventy-two healthy Italian participants, with medium-high education and the ability to proficiently use computerized systems, were enrolled. The sample was subdivided into six groups, one for each age decade. Twelve neurological patients with mixed aetiology, age and educational level were also recruited. Both the computerized and the paper-and-pencil versions of the RMT were administered in two separate sessions. Results: In healthy participants, the computerized and the paper-and-pencil versions of the RMT showed statistical equivalence for words, unknown faces and buildings. In the neurological patients, no statistical difference was found between performance on the two versions of the RMT. A moderate-to-good inter-rater reliability between the two versions was also found in both samples. Finally, the computerized version of the RMT was perceived as acceptable by both healthy participants and neurological patients on the System Usability Scale (SUS). Conclusion: The computerized version of the RMT can be used as a reliable alternative to the traditional version.

    Measuring the Effects of Thread Placement on the Kendall Square KSR1

    This paper describes a measurement study of the effects of thread placement on memory access times on the Kendall Square multiprocessor, the KSR1. The KSR1 uses a conventional shared memory programming model in a distributed memory architecture. The architecture is based on a ring of rings of 64-bit superscalar microprocessors. The KSR1 has a Cache-Only Memory Architecture (COMA). Memory consists of the local cache memories attached to each processor. Whenever an address is accessed, the data item is automatically copied to the local cache memory module, so that access times for subsequent references will be minimal. If a local cache has space allocated for a particular data item, but does not have a current valid copy of that data item, then it is possible for the cache to acquire a valid read-only copy before it is requested by the local processor, due to a request by a different processor that happens to pass by on the ring. This automatic prefetching can greatly reduce the average time for a thread to acquire data items. Because of the automatic prefetching, the time required to obtain a valid copy of a data item does not depend simply on the distance from the owner of the data item, but also depends on the placement and number of other processing threads which share the same data item. Also, the strategic placement of processing threads helps programs take advantage of the unique features of the memory architecture, which help eliminate memory access bottlenecks for shared data sets. Experiments run on the KSR1 across a wide variety of thread configurations show that shared memory access is accelerated through strategic placement of threads which share data. The results indicate strategies for improving the performance of application programs, and illustrate that KSR1 memory access times can remain nearly constant even when the number of participating threads increases.
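
    The KSR1 is long retired, but a loosely analogous placement experiment can be sketched on a modern Linux machine: the Python sketch below (assumes Linux, at least two CPUs, and uses invented "close"/"far" placements) pins worker processes to chosen CPUs and times repeated accesses to shared data.

```python
import os, time, multiprocessing as mp

# Toy sketch (Linux-only modern analogue, not the KSR1 study itself): pin worker
# processes to chosen CPUs and time repeated accesses to a shared array, to see
# how the placement of workers sharing data affects access cost.

def worker(cpu, shared, n_iter, out):
    os.sched_setaffinity(0, {cpu})               # pin this worker to a single CPU
    start = time.perf_counter()
    acc = 0.0
    for i in range(n_iter):
        acc += shared[i % len(shared)]           # touch the shared data repeatedly
    out.put((cpu, time.perf_counter() - start))

if __name__ == "__main__":
    shared = mp.Array("d", range(4096), lock=False)    # shared read-only data
    out = mp.Queue()
    all_cpus = sorted(os.sched_getaffinity(0))         # assumes >= 2 usable CPUs
    for placement in ([all_cpus[0], all_cpus[1]],      # "close together"
                      [all_cpus[0], all_cpus[-1]]):    # "far apart"
        procs = [mp.Process(target=worker, args=(c, shared, 2_000_000, out))
                 for c in placement]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print(placement, [out.get() for _ in procs])
```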

    How to supercharge the Amazon T2: observations and suggestions

    Cloud service providers adopt a credit system to allow users to obtain periods of performance bursts without additional cost. For example, the Amazon EC2 T2 instance offers low baseline performance and the capability to achieve short periods of high performance using CPU credits. Once a T2 instance is created and assigned some initial credits, while its CPU utilization is above the baseline threshold, there is a transient period where performance is boosted and the assigned CPU credits are used. After all credits are used, the maximum achievable performance drops to baseline. Credits accrue periodically, when the instance utilization is below the baseline threshold. This paper proposes a methodology to increase the performance benefits of T2 by seamlessly extending the duration of the transient period while maintaining high performance. This extension of the high performance transient period is combined with proactive migration to further take advantage of the initially assigned credits. We conduct experiments to demonstrate the benefits of this methodology for both single-tier and multi-tier applications.
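
    The credit mechanics summarized above can be illustrated with a toy credit-bucket simulation; the Python sketch below uses invented parameter values (baseline, accrual_per_hour, initial_credits) rather than AWS's published T2 numbers, and it is not the paper's methodology.

```python
def simulate_t2_credits(util_trace, baseline=0.10, accrual_per_hour=6.0,
                        initial_credits=30.0, dt_minutes=1.0):
    """Toy credit-bucket model with invented numbers (not AWS's published ones).
    One credit = one vCPU-minute at 100% utilization."""
    credits = initial_credits
    served = []
    for u in util_trace:                        # demanded utilization per interval
        if u < baseline:                        # below baseline: credits accrue
            credits += accrual_per_hour * dt_minutes / 60.0
            served.append(u)
            continue
        spend = (u - baseline) * dt_minutes     # credit-minutes needed to burst
        if spend <= credits:
            credits -= spend
            served.append(u)                    # burst fully served
        else:
            credits = 0.0
            served.append(baseline)             # throttled back to baseline
    return served, credits

# Example: sustained 80% demand drains the initial credits, then throttling kicks in.
served, remaining = simulate_t2_credits([0.8] * 240)
```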

    Model-Driven System Capacity Planning under Workload Burstiness

    In this paper, we define and study a new class of capacity planning models called MAP queueing networks. MAP queueing networks provide the first analytical methodology to describe and accurately predict the performance of complex systems operating under bursty workloads, such as multi-tier architectures or storage arrays. Burstiness is a feature that significantly degrades system performance and that cannot be captured explicitly by existing capacity planning models. MAP queueing networks address this limitation by describing computer systems as closed networks of servers whose service times are Markovian Arrival Processes (MAPs), a class of Markov-modulated point processes that can model general distributions and burstiness. In this paper, we show that MAP queueing networks provide reliable performance predictions even if the service processes are bursty. We propose a methodology to solve MAP queueing networks by two state space transformations, which we call Linear Reduction (LR) and Quadratic Reduction (QR). These transformations dramatically decrease the number of states in the underlying Markov chain of the queueing network model. From these reduced state spaces, we obtain two classes of bounds on arbitrary performance indexes, e.g., throughput, response time, and utilization. Numerical experiments show that the LR and QR bounds achieve good accuracy. We also illustrate the high effectiveness of the LR and QR bounds in the performance analysis of a real multi-tier architecture subject to TPC-W workloads that are characterized as bursty. These results promote MAP queueing networks as a new robust class of capacity planning models.
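
    As a flavor of the bursty service processes these models target, the Python sketch below generates samples from a simplified two-state Markov-modulated exponential process; it is an illustration only, with invented names and rates, not a full MAP or the LR/QR solution technique.

```python
import random

def mmpp2_service_times(n, rate_fast=10.0, rate_slow=0.5,
                        p_stay_fast=0.95, p_stay_slow=0.98, seed=0):
    """Simplified two-state Markov-modulated exponential generator (a full MAP
    also allows phase changes at event epochs): the hidden state selects the
    rate, so runs of short and long samples reproduce burstiness that
    exponential service assumptions cannot capture."""
    rng = random.Random(seed)
    fast, samples = True, []
    for _ in range(n):
        samples.append(rng.expovariate(rate_fast if fast else rate_slow))
        if rng.random() > (p_stay_fast if fast else p_stay_slow):
            fast = not fast                     # switch the modulating phase
    return samples
```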

    Dealing with Burstiness in Multi-Tier Applications: Models and Their Parameterization

    Abstract—Workloads and resource usage patterns in enterprise applications often show burstiness, resulting in large degradation of the perceived user performance. In this paper, we propose a methodology for detecting burstiness symptoms in multi-tier applications; rather than identifying the root cause of burstiness, we incorporate this information into models for performance prediction. The modeling methodology is based on the index of dispersion of the service process at a server, which is inferred by observing the number of completions within the concatenated busy times of that server. The index of dispersion is used to derive a Markov-modulated process that captures well the burstiness and variability of the service process at each resource and that allows us to define queueing network models for performance prediction. Experimental results and performance model predictions are in excellent agreement and argue for the effectiveness of the proposed methodology under both bursty and non-bursty workloads. Furthermore, we show that the methodology extends to modeling flash crowds that create burstiness in the stream of requests incoming to the application. Index Terms—Capacity planning, multi-tier applications, bursty workload, bottleneck switch, index of dispersion.
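
    For context, a common way to estimate an index of dispersion from measured service times is the squared coefficient of variation scaled by the lag-autocorrelation sum; the Python sketch below follows that textbook formula (max_lag is an invented truncation knob) and is not necessarily the exact busy-period estimator used in the paper.

```python
def index_of_dispersion(samples, max_lag=100):
    """Common estimator of the asymptotic index of dispersion of a stationary
    process: SCV * (1 + 2 * sum of lag-k autocorrelations), truncated at
    max_lag. Sketch only; not necessarily the paper's busy-period estimator."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    if var == 0.0:
        return 0.0                              # deterministic sequence
    scv = var / (mean ** 2)
    acf_sum = 0.0
    for k in range(1, min(max_lag, n - 1) + 1):
        cov = sum((samples[i] - mean) * (samples[i + k] - mean)
                  for i in range(n - k)) / n
        acf_sum += cov / var
    return scv * (1.0 + 2.0 * acf_sum)
```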

    Performance-Guided Load (Un)balancing under Autocorrelated Flows


    Childhood obesity and maternal personality traits: A new point of view on obesity behavioural aspects

    The epidemic spread of childhood obesity in Western society has interested many researchers, who agree in defining it as a multifactorial disease in which not only eating habits and a sedentary lifestyle but also genetic predisposition play a role. The aim of this study was to analyze the personality profile of a group of mothers of children with obesity and to compare this profile to that of a group of mothers of children without obesity. A total of 258 mothers participated in the study (126 mothers of children with obesity and 132 mothers of children without obesity). Weight and height were measured and the body mass index was calculated. The Minnesota Multiphasic Personality Inventory, second edition (MMPI-2), which evaluates personality and psychological disorders, was used to assess the personality profile. The results suggested that mothers of children with obesity score higher than mothers of children without obesity on all MMPI-2 subscales. In most of these subscales, the differences between the two groups of mothers were statistically significant, with medium to high effect sizes. These data suggest a new perspective on childhood obesity, identifying it as a multifactorial pathology that requires a multimodal and multidisciplinary approach that also takes care of caregivers to ensure optimal therapeutic efficacy.

    Lessons from Discarded Computer Architectures
